OpenAI CTO Mira Murati on ScarJo Controversy, Sam Altman & Disinfo Fears | On With Kara Swisher

Update: 2024-07-05

Digest

Kara Swisher interviews Mira Murati, OpenAI's CTO, about the company's recent partnership with Apple to integrate ChatGPT into Apple's devices. Murati emphasizes the importance of privacy and trust in the collaboration, highlighting OpenAI's commitment to building technologies that users feel confident about. The conversation then turns to the complex issue of AI-powered disinformation, particularly in the context of the upcoming presidential election. Murati acknowledges the potential risks but emphasizes OpenAI's efforts to mitigate them through measures like improving accuracy and detection, reducing political bias, and providing accurate voting information. She also discusses the company's approach to data acquisition, emphasizing partnerships with publishers and content creators. The interview concludes with a discussion of the future of artificial general intelligence (AGI), Murati's perspective on the company's safety culture, and her hopes for the positive impact of AI in areas like education and healthcare.

Outlines

00:00:00
Introduction and Partnership with Apple

This Chapter introduces the episode and highlights OpenAI's new partnership with Apple to integrate ChatGPT into Apple's devices. Mira Murati, OpenAI's CTO, emphasizes the importance of privacy and trust in the collaboration, highlighting OpenAI's commitment to building technologies that users feel confident about.

00:03:58
AI-Powered Disinformation and Election Integrity

This Chapter delves into the complex issue of AI-powered disinformation, particularly in the context of the upcoming presidential election. Murati acknowledges the potential risks but emphasizes OpenAI's efforts to mitigate them through measures like improving accuracy and detection, reducing political bias, and providing accurate voting information.

00:10:52
Data Acquisition and Transparency

This Chapter focuses on OpenAI's approach to data acquisition, emphasizing the importance of partnerships with publishers and content creators. Murati discusses the company's efforts to ensure accuracy and provide compensation for content creators, addressing concerns about data ownership and rights.

00:17:25
The Future of Artificial General Intelligence (AGI)

This Chapter explores the future of artificial general intelligence (AGI), with Murati discussing OpenAI's internal roadmap and her perspective on the significance of AGI. She emphasizes the importance of evaluating and forecasting the real-world impact of AI, both societal and economic.

00:27:50
Safety Culture and OpenAI's Internal Dynamics

This Chapter delves into OpenAI's safety culture and the internal dynamics of the company. Murati addresses concerns about the company's focus on product development over safety, highlighting the importance of open debate and addressing concerns about potential risks.

00:36:00
Mira Murati's Role and Relationship with Sam Altman

This Chapter focuses on Mira Murati's role at OpenAI and her relationship with Sam Altman, the company's CEO. Murati discusses her experience as CEO during a brief period when Altman was fired and then reinstated, emphasizing the importance of putting the mission and team first.

00:41:05
AI's Impact on Elections and Disinformation

This Chapter examines the potential impact of AI on elections and disinformation. Murati discusses OpenAI's efforts to mitigate these risks, including measures to prevent abuse, reduce political bias, and provide accurate voting information.

00:46:32
Conclusion: Risks and Promise of AI

This Chapter concludes the episode with Murati's reflections on the risks and promise of AI. She emphasizes the importance of shared responsibility, understanding the technology, and mitigating risks through collaboration with experts, civil society, and governments.

Keywords

OpenAI


OpenAI is a research and deployment company focused on developing and promoting friendly artificial intelligence. Founded in 2015 by a group of prominent figures in the tech industry, including Elon Musk, Sam Altman, and Greg Brockman, OpenAI aims to ensure that artificial general intelligence benefits all of humanity. The company has gained significant attention for its groundbreaking work in large language models, particularly ChatGPT, which has revolutionized the field of natural language processing.

ChatGPT


ChatGPT is a large language model developed by OpenAI. It is a powerful AI system capable of generating human-like text, translating languages, writing different kinds of creative content, and answering questions in an informative way. ChatGPT has become a popular tool for various applications, including customer service, content creation, and education. Its ability to engage in natural conversations and provide insightful responses has made it a significant advancement in the field of AI.

Artificial General Intelligence (AGI)


Artificial general intelligence (AGI) refers to a hypothetical type of artificial intelligence that possesses the ability to understand and reason like a human being. Unlike narrow AI, which is designed for specific tasks, AGI would be capable of performing any intellectual task that a human can. The development of AGI is a long-term goal in the field of AI, with significant implications for society and the future of work.

Disinformation


Disinformation refers to the deliberate spread of false or misleading information, often with the intention to deceive or manipulate. In the digital age, disinformation has become a significant problem, particularly on social media platforms. AI-powered disinformation poses a growing threat, as it can be used to create highly convincing fake content, such as deepfakes, and spread it widely and effectively.

Deepfake


A deepfake is a piece of synthetic media, typically a video or audio recording, that has been manipulated to depict a person saying or doing something that they did not actually say or do. Deepfakes are created using advanced AI techniques, such as deep learning, and can be incredibly realistic, making it difficult to distinguish them from genuine content. Deepfakes have raised concerns about their potential for misuse, including spreading misinformation, damaging reputations, and creating social unrest.

Privacy


Privacy refers to the right of individuals to control their personal information and how it is used. In the context of AI, privacy concerns arise from the vast amounts of data that are collected and used to train AI models. There are concerns about the potential for misuse of this data, including identity theft, discrimination, and surveillance. Ensuring privacy is crucial for building trust in AI technologies and protecting individual rights.

Election Integrity


Election integrity refers to the assurance that elections are conducted fairly and accurately, with all eligible voters having the opportunity to cast their ballots and have their votes counted correctly. In the digital age, election integrity is increasingly threatened by disinformation, foreign interference, and cyberattacks. Ensuring election integrity is essential for maintaining public trust in democratic processes and ensuring that the will of the people is reflected in the outcome of elections.

Stanford Institute for Human-Centered AI


The Stanford Institute for Human-Centered AI (HAI) is a research institute at Stanford University dedicated to advancing artificial intelligence in a way that benefits humanity. Founded in 2019, HAI brings together researchers, students, and industry leaders to address the ethical, societal, and technical challenges of AI. The institute's mission is to ensure that AI is developed and deployed responsibly, with a focus on human values and well-being.

Content Media Manager


Content Media Manager is a tool developed by OpenAI to help identify and manage the types of data used to train its AI models. This tool aims to improve transparency and accountability in data acquisition, allowing OpenAI to better understand the provenance of the data and ensure that it is used responsibly. The Content Media Manager is part of OpenAI's efforts to address concerns about data ownership and rights, particularly in the context of partnerships with publishers and content creators.

Voice Engine


Voice Engine is a technology developed by OpenAI that can recreate someone's voice using only a short audio recording. This technology has the potential for both positive and negative applications. While it could be used for entertainment purposes, such as creating voiceovers or dubbing, it also raises concerns about its potential for misuse, including creating deepfakes and spreading disinformation.

Q&A

  • What are some of the key concerns about AI-powered disinformation, especially in the context of the upcoming presidential election?

    Mira Murati highlights the potential for AI to be used to manipulate people's beliefs and influence their actions, particularly in the context of elections. She acknowledges the risks of AI-powered disinformation, including the creation of highly convincing fake content and the spread of misleading information.

  • How is OpenAI addressing the issue of AI-powered disinformation?

    OpenAI is taking a multi-pronged approach to mitigate the risks of AI-powered disinformation. They are focusing on improving accuracy detection, reducing political bias in their models, and providing accurate voting information. They are also exploring technical solutions like watermarking and classifiers to help users identify deepfakes.

  • What is OpenAI's approach to data acquisition, and how do they address concerns about data ownership and rights?

OpenAI acquires data from three main sources: publicly available data, partnerships with publishers, and data produced by human workers. They emphasize the importance of partnerships with publishers and content creators, ensuring accuracy and exploring ways to compensate them for their data. They are also developing tools like the Content Media Manager to improve transparency and accountability in data acquisition.

  • What is OpenAI's perspective on the future of artificial general intelligence (AGI)?

    Mira Murati believes that AGI is a significant goal in the field of AI, but she emphasizes the importance of evaluating and forecasting the real-world impact of AI, both societal and economic. She believes that the definition of intelligence will continue to evolve, and that the focus should be on assessing the impact of AI on society.

  • How does OpenAI address concerns about its safety culture and the potential for product development to overshadow safety considerations?

    Murati acknowledges that there have been concerns about OpenAI's focus on product development over safety. She emphasizes the importance of open debate and addressing concerns about potential risks. She also highlights the company's commitment to safety, pointing to the rigorous safety systems and processes in place.

  • What is Mira Murati's perspective on her relationship with Sam Altman, OpenAI's CEO?

    Murati describes her relationship with Altman as a strong partnership, despite their differences in approach. She acknowledges that they have disagreements but emphasizes that they both care deeply about the mission of OpenAI and put it first.

  • What are some of the specific scenarios that Mira Murati is most concerned about in terms of the potential risks of AI?

    Murati expresses concern about the potential for AI to be used to manipulate and control people's beliefs and actions, particularly in the context of democracies. She worries about the potential for AI to be used to persuade people to do specific things, potentially leading to societal control.

  • What is Mira Murati most hopeful about in terms of the potential benefits of AI?

    Murati is most hopeful about the potential for AI to revolutionize education, making high-quality and free education available to everyone, regardless of location or background. She believes that AI can personalize education, cater to individual learning styles, and accelerate human knowledge and creativity.

  • What is the significance of OpenAI's partnership with Apple?

OpenAI's partnership with Apple is a significant milestone, marking the first time Apple has integrated a third-party large language model into its devices. This partnership could make AI more accessible to a wider audience and accelerate the adoption of AI technologies.

  • What are some of the challenges that OpenAI faces in ensuring the responsible development and deployment of AI?

    OpenAI faces a number of challenges in ensuring the responsible development and deployment of AI, including addressing concerns about data privacy, mitigating the risks of AI-powered disinformation, and balancing the pursuit of technological advancement with ethical considerations.

Show Notes

Pivot is off for the holiday! In the meantime, we're bringing you an episode of On With Kara Swisher.

Kara interviews Mira Murati, Chief Technology Officer at OpenAI, and one of the most powerful people in tech. Murati has helped the company skyrocket to the forefront of the generative AI boom, and Apple's recent announcement that it will soon put ChatGPT in its iPhones, iPads, and laptops will only help increase OpenAI's reach. But there have been some issues along the way, including CEO Sam Altman's brief ouster, accusations of putting profit over safety, and the controversy over whether the company stole Scarlett Johansson's voice. Kara and Murati discuss it all.


This interview was recorded live at the Johns Hopkins University Bloomberg Center in Washington, DC as part of their new Discovery Series.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
